This work presents a novel reconfigurable architecture for low-latency graph neural network (GNN) designs targeting particle detectors. Accelerating GNNs for particle detectors is challenging because sub-microsecond latency is required to deploy the networks for online event selection in the Level-1 trigger of the CERN Large Hadron Collider experiments. This paper proposes a custom code transformation that applies strength reduction to the matrix-multiplication operations in interaction-network-based GNNs with fully connected graphs, avoiding expensive multiplications. It exploits the sparsity patterns and binary adjacency matrices, and avoids irregular memory accesses, leading to reduced latency and improved hardware efficiency. In addition, we introduce an outer-product-based matrix-multiplication approach, enhanced by the strength reduction, for low-latency designs. A fusion step is also introduced to further reduce design latency. Furthermore, a GNN-specific algorithm-hardware co-design approach is presented, which not only finds designs with better latency but also discovers high-accuracy designs under a given latency constraint. Finally, a customizable template for this low-latency GNN hardware architecture has been designed and open-sourced; it enables the generation of low-latency FPGA designs with efficient resource utilization using a high-level synthesis tool. Evaluation results show that our FPGA implementation is up to 24 times faster and consumes up to 45 times less power than a GPU implementation. Compared to our previous FPGA implementations, this work achieves 6.51 to 16.7 times lower latency. Moreover, the latency of our FPGA design is low enough to enable the deployment of GNNs in a sub-microsecond, real-time collider trigger system, allowing it to benefit from improved accuracy.
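To make the strength-reduction idea concrete, here is a minimal NumPy sketch, assuming a fully connected interaction-network graph whose sender/receiver matrices are binary; all names and sizes are illustrative and not taken from the paper's released templates.

```python
import numpy as np

# For a fully connected graph, the binary sender/receiver matrices R_s, R_r
# only select and sum node features, so their matmuls reduce to index
# gathers with zero multiplications -- the essence of the strength reduction.
n_nodes, n_feats = 4, 8
x = np.random.randn(n_nodes, n_feats)                 # node feature matrix

edges = [(s, r) for s in range(n_nodes)               # fully connected,
         for r in range(n_nodes) if s != r]           # no self-loops

# Naive formulation: explicit binary matrices and costly matmuls.
R_s = np.zeros((n_nodes, len(edges)))
R_r = np.zeros((n_nodes, len(edges)))
for e, (s, r) in enumerate(edges):
    R_s[s, e] = 1.0
    R_r[r, e] = 1.0
edge_in_matmul = np.concatenate([x.T @ R_s, x.T @ R_r])

# Strength-reduced formulation: pure index gathers, no multiplies.
senders = np.array([s for s, _ in edges])
receivers = np.array([r for _, r in edges])
edge_in_gather = np.concatenate([x[senders].T, x[receivers].T])

assert np.allclose(edge_in_matmul, edge_in_gather)
```

Because the graph is fully connected, the gather indices are fixed at compile time, which is what lets a hardware design avoid irregular memory accesses.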
Attention-based neural networks are prevalent in many AI tasks. Despite their excellent algorithmic performance, the attention mechanism and feed-forward networks (FFNs) demand excessive computation and memory resources, which often compromises their hardware performance. Although various sparse variants have been introduced, most approaches only focus on mitigating the quadratic scaling of attention at the algorithm level, without explicitly considering how efficiently their methods map onto real hardware designs. Furthermore, most efforts focus on either the attention mechanism or the FFN alone without jointly optimizing both parts, causing most current designs to lack scalability when handling different input lengths. This paper systematically considers the sparsity patterns of different variants from a hardware perspective. At the algorithm level, we propose FABNet, a hardware-friendly variant that adopts a unified butterfly sparsity pattern to approximate both the attention mechanism and the FFN. At the hardware level, a novel adaptable butterfly accelerator is proposed that can be configured at runtime via dedicated hardware control to accelerate different butterfly layers using a single unified hardware engine. On the Long-Range-Arena dataset, FABNet achieves the same accuracy as the vanilla Transformer while reducing the amount of computation by 10 to 66 times and the number of parameters by 2 to 22 times. By jointly optimizing the algorithm and hardware, our FPGA-based butterfly accelerator achieves a 14.2 to 23.2 times speedup over state-of-the-art accelerators normalized to the same computational budget. Compared to optimized CPU and GPU designs on the Raspberry Pi 4 and Jetson Nano, our system is up to 273.8 and 15.1 times faster, respectively, under the same power budget.
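As a hedged illustration of the butterfly structure FABNet relies on, the sketch below implements a generic FFT-style butterfly matrix-vector product: log2(n) stages, each mixing every element with a partner at stride 2^k, giving O(n log n) parameters and operations instead of O(n^2). The weight layout is an assumption for exposition, not FABNet's actual layer.

```python
import numpy as np

def butterfly_matvec(w, x):
    """Apply log2(n) butterfly stages; w has shape (log2(n), n, 2)."""
    n = len(x)
    for k in range(w.shape[0]):
        stride = 1 << k
        y = np.empty_like(x)
        for i in range(n):
            j = i ^ stride                    # butterfly partner of i
            y[i] = w[k, i, 0] * x[i] + w[k, i, 1] * x[j]
        x = y
    return x

rng = np.random.default_rng(0)
n = 8
stages = int(np.log2(n))                      # 3 stages for n = 8
w = rng.standard_normal((stages, n, 2))       # 2*n*log2(n) weights vs n*n dense
x = rng.standard_normal(n)
print(butterfly_matvec(w, x))
```

Because every stage has the same regular dataflow, a single hardware engine can be time-multiplexed across all butterfly layers, which is the scalability argument behind the adaptable accelerator.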
Recent advances in algorithm-hardware co-design for deep neural networks (DNNs) have demonstrated their potential for automatically designing neural architectures and hardware designs. Nevertheless, this remains a challenging optimization problem due to the expensive training cost and time-consuming hardware implementation, which make exploring the vast design space of neural architectures and hardware designs intractable. In this paper, we demonstrate that our proposed approach is capable of locating designs on the Pareto frontier. This capability is enabled by a novel three-phase co-design framework with the following new features: (a) decoupling DNN training from the design space exploration of hardware architectures and neural architectures, (b) providing a hardware-friendly neural architecture space by considering hardware characteristics when constructing the search cells, and (c) adopting Gaussian processes to predict accuracy, latency, and power consumption, thereby avoiding the time-consuming synthesis and place-and-route processes. Compared with the manually designed ResNet-101, InceptionV2, and MobileNetV2, we achieve up to 5% higher accuracy with up to 3x speedup on the ImageNet dataset. Compared with other state-of-the-art co-design frameworks, our found network and hardware configurations achieve 2%~6% higher accuracy, 2x~26x smaller latency, and 8.5x higher energy efficiency.
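The Gaussian-process surrogate step can be sketched with scikit-learn as below; the design knobs, candidate points, and latency numbers are placeholders invented for illustration, and the real framework fits separate predictors for accuracy, latency, and power.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

# Fit a GP on a few synthesized design points (knobs -> measured latency),
# then rank unsynthesized candidates using the posterior instead of running
# synthesis and place-and-route for each one.
X = np.array([            # [parallelism, bitwidth, buffer_kb] (assumed knobs)
    [2, 8, 16], [4, 8, 32], [8, 16, 32], [16, 16, 64],
], dtype=float)
y = np.array([5.1, 3.2, 2.1, 1.4])            # placeholder latencies (ms)

gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
gp.fit(X, y)

candidates = np.array([[8, 8, 64], [16, 8, 32]], dtype=float)
mean, std = gp.predict(candidates, return_std=True)
for c, m, s in zip(candidates, mean, std):
    print(c, f"-> predicted latency {m:.2f} +/- {s:.2f} ms")
```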
Neural networks have demonstrated outstanding performance across a wide range of tasks. In particular, recurrent architectures based on long short-term memory (LSTM) cells have shown an excellent capability to model temporal dependencies in real-world data. However, standard recurrent architectures cannot estimate their uncertainty, which is essential for safety-critical applications such as medicine. In contrast, Bayesian recurrent neural networks (RNNs) are able to provide uncertainty estimates with improved accuracy. Nonetheless, Bayesian RNNs are computationally and memory demanding, which limits their practicality despite their advantages. To address this issue, we propose an FPGA-based hardware design to accelerate Bayesian LSTM-based RNNs. To further improve the overall algorithm-hardware performance, a co-design framework is proposed to explore the most suitable algorithm-hardware configurations for Bayesian RNNs. We conduct extensive experiments on healthcare applications to demonstrate the effectiveness of our design and framework. Compared with GPU implementations, our FPGA-based design achieves up to 10 times speedup with nearly 106 times higher energy efficiency. To the best of our knowledge, this is the first work targeting the acceleration of Bayesian RNNs on FPGAs.
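A minimal sketch of why Bayesian RNN inference is costly, assuming a mean-field Gaussian posterior over the recurrent weights and a toy single-gate cell (a real Bayesian LSTM samples every gate's weights): each prediction averages many stochastic forward passes, and their spread is the uncertainty estimate.

```python
import numpy as np

rng = np.random.default_rng(0)
T, D, H, S = 10, 4, 16, 32       # time steps, input dim, hidden size, samples
x = rng.standard_normal((T, D))

mu_W = rng.standard_normal((H, D + H)) * 0.1   # posterior mean (placeholder)
sig_W = 0.05                                    # posterior std (placeholder)

def forward(W):
    h = np.zeros(H)
    for t in range(T):                          # toy recurrent cell
        h = np.tanh(W @ np.concatenate([x[t], h]))
    return h.mean()                             # scalar readout, placeholder

# S independent weight samples -> S full sequence passes per prediction,
# which is exactly the compute the FPGA design has to accelerate.
preds = [forward(mu_W + sig_W * rng.standard_normal(mu_W.shape))
         for _ in range(S)]
print(f"mean {np.mean(preds):.3f}, uncertainty {np.std(preds):.3f}")
```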
Neural networks (NNs) have demonstrated their potential in a wide range of applications such as image recognition, decision making, and recommendation systems. However, standard NNs are unable to capture their model uncertainty, which is crucial for many safety-critical applications including healthcare and autonomous vehicles. In contrast, Bayesian neural networks (BNNs) are able to express the uncertainty in their predictions in a mathematically grounded way. Nonetheless, BNNs have not been widely adopted in industrial practice, mainly due to their expensive computational cost and limited hardware performance. This work proposes a novel FPGA-based hardware architecture to accelerate BNN inference via Monte Carlo dropout. Compared with other state-of-the-art BNN accelerators, the proposed accelerator achieves up to 4 times higher energy efficiency and 9 times better computational efficiency. Considering partial Bayesian inference, an automatic framework is proposed to explore the trade-off between hardware and algorithmic performance. Extensive experiments demonstrate that our proposed framework can effectively find the optimal points in the design space.
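Monte Carlo dropout, the mechanism this accelerator targets, fits in a few lines: dropout stays active at inference, and repeated stochastic passes yield a predictive mean and an uncertainty estimate. The tiny MLP, dropout rate, and sample count below are placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.standard_normal((32, 8))
W2 = rng.standard_normal((1, 32))
x = rng.standard_normal(8)
p, S = 0.5, 100                        # dropout rate, MC samples

preds = []
for _ in range(S):
    h = np.maximum(W1 @ x, 0.0)        # hidden layer with ReLU
    mask = rng.random(h.shape) > p     # dropout kept ON at inference
    preds.append((W2 @ (h * mask / (1 - p))).item())

# Mean is the prediction; std is the model-uncertainty estimate.
print(f"prediction {np.mean(preds):.3f} +/- {np.std(preds):.3f}")
```

The S masked passes are mutually independent, which makes the workload attractive for a parallel FPGA datapath; partial Bayesian inference applies the sampling to only a subset of layers, trading uncertainty quality for speed.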
To facilitate research on text generation, this paper presents a comprehensive and unified library, TextBox 2.0, focusing on the use of pre-trained language models (PLMs). To be comprehensive, our library covers 13 common text generation tasks and their corresponding 83 datasets and further incorporates 45 PLMs covering general, translation, Chinese, dialogue, controllable, distilled, prompting, and lightweight PLMs. We also implement 4 efficient training strategies and provide 4 generation objectives for pre-training new PLMs from scratch. To be unified, we design the interfaces to support the entire research pipeline (from data loading to training and evaluation), ensuring that each step can be fulfilled in a unified way. Despite the rich functionality, it is easy to use our library, either through the friendly Python API or the command line. To validate the effectiveness of our library, we conduct extensive experiments and exemplify four types of research scenarios. The project is released at https://github.com/RUCAIBox/TextBox.
Although pre-trained language models (PLMs) have shown impressive performance through text-only self-supervised training, they are found to lack visual semantics or commonsense, e.g., the sizes, shapes, and colors of commonplace objects. Existing solutions often rely on explicit images for visual knowledge augmentation (requiring time-consuming retrieval or generation), and they also conduct the augmentation for the whole input text, without considering whether it is actually needed for specific inputs or tasks. To address these issues, we propose VAWI, a novel visually-augmented fine-tuning approach that can be generally applied to various PLMs and NLP tasks without using any retrieved or generated images. Specifically, we first identify the visually-hungry words (VH-words) in the input text via a token selector, for which three different methods are proposed: syntax-, attention-, and learning-based strategies. Then, we adopt a fixed CLIP text encoder to generate the visually-augmented representations of these VH-words. Since it has been pre-trained with a vision-language alignment task on a large-scale corpus, it is capable of injecting visual semantics into the aligned text representations. Finally, the visually-augmented features are fused and transformed into pre-designed visual prompts based on the VH-words, which can be inserted into PLMs to enrich the visual semantics of word representations. We conduct extensive experiments on ten NLP tasks, i.e., the GLUE benchmark, CommonsenseQA, CommonGen, and SNLI-VE. Experimental results show that our approach consistently improves the performance of BERT, RoBERTa, BART, and T5 at different scales, and significantly outperforms several competitive baselines. Our code and data are publicly available at https://github.com/RUCAIBox/VAWI.
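The core augmentation step can be sketched with Hugging Face's CLIP text encoder; the VH-word list, projection size, and wiring below are illustrative assumptions based on the abstract, not the released VAWI code.

```python
import torch
from transformers import CLIPTokenizer, CLIPTextModel

# Encode visually-hungry words with a frozen CLIP text encoder, then
# project them to the PLM hidden size to serve as visual prompts.
name = "openai/clip-vit-base-patch32"
tok = CLIPTokenizer.from_pretrained(name)
clip_text = CLIPTextModel.from_pretrained(name).eval()

vh_words = ["red", "round", "tiny"]        # assumed token-selector output
with torch.no_grad():
    batch = tok(vh_words, padding=True, return_tensors="pt")
    feats = clip_text(**batch).pooler_output        # (3, 512) CLIP features

proj = torch.nn.Linear(feats.shape[-1], 768)   # 768 = e.g. BERT hidden size
visual_prompts = proj(feats)                   # to be inserted into the PLM
print(visual_prompts.shape)                    # torch.Size([3, 768])
```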
Dense retrieval aims to map queries and passages into a low-dimensional vector space for efficient similarity measuring, showing promising effectiveness in various large-scale retrieval tasks. Since most existing methods commonly adopt pre-trained Transformers (e.g., BERT) for parameter initialization, some work focuses on proposing new pre-training tasks for compressing the useful semantic information of passages into dense vectors, achieving remarkable performance. However, it is still challenging to effectively capture the rich semantic information of passages and the relations among them in dense vectors via a single pre-training task. In this work, we propose MASTER, a multi-task pre-trained model that unifies and integrates multiple pre-training tasks with different learning objectives under a bottlenecked masked-autoencoder architecture. Concretely, MASTER utilizes a multi-decoder architecture to integrate three types of pre-training tasks: corrupted passage recovery, related passage recovery, and PLM output recovery. By incorporating a shared deep encoder, we construct a representation bottleneck in our architecture, compressing the abundant semantic information across tasks into dense vectors. The first two types of tasks concentrate on capturing the semantic information of passages and the relationships among them within the pre-training corpus. The third captures knowledge beyond the corpus from external PLMs (e.g., GPT-2). Extensive experiments on several large-scale passage retrieval datasets show that our approach outperforms previous state-of-the-art dense retrieval methods. Our code and data are publicly released at https://github.com/microsoft/SimXNS.
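The bottleneck idea can be sketched schematically in PyTorch, with illustrative layer counts and dimensions: one deep shared encoder compresses each passage into a single vector, which is the only memory the shallow per-task decoders may attend to.

```python
import torch
import torch.nn as nn

d, n_heads = 256, 4
enc_layer = nn.TransformerEncoderLayer(d, n_heads, batch_first=True)
encoder = nn.TransformerEncoder(enc_layer, num_layers=12)    # deep, shared

dec_layer = nn.TransformerDecoderLayer(d, n_heads, batch_first=True)
decoders = nn.ModuleList(
    nn.TransformerDecoder(dec_layer, num_layers=1)           # shallow, per task
    for _ in range(3)   # corrupted / related / PLM-output recovery
)

tokens = torch.randn(2, 64, d)             # a batch of embedded passages
bottleneck = encoder(tokens)[:, :1, :]     # keep only the [CLS]-slot vector

targets = torch.randn(2, 32, d)            # task-specific decoder inputs
outs = [dec(targets, memory=bottleneck) for dec in decoders]
print([o.shape for o in outs])             # three (2, 32, 256) tensors
```

Because the decoders are weak and see only the single bottleneck vector, reconstruction pressure pushes the task-relevant semantics into that vector, which is what is then used for retrieval.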
We present 3DHumanGAN, a 3D-aware generative adversarial network (GAN) that synthesizes images of full-body humans with consistent appearance under different view angles and body poses. To tackle the representational and computational challenges in synthesizing the articulated structure of human bodies, we propose a novel generator architecture in which a 2D convolutional backbone is modulated by a 3D pose mapping network. The 3D pose mapping network is formulated as a renderable implicit function conditioned on a posed 3D human mesh. This design has several merits: i) it allows us to harness the power of 2D GANs to generate photo-realistic images; ii) it generates consistent images under varying view angles and specifiable poses; iii) the model can benefit from the 3D human prior. Our model is adversarially learned from a collection of web images without the need for manual annotation.
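One plausible reading of "a 2D convolutional backbone modulated by a 3D pose mapping network" is a SPADE-style spatially-adaptive modulation, sketched below with placeholder shapes; in the actual model, the conditioning features come from rendering an implicit function over the posed 3D human mesh.

```python
import torch
import torch.nn as nn

# Per-pixel scale/shift of 2D backbone features, predicted from
# (assumed) per-pixel features rendered from the posed 3D mesh.
C, H, W = 64, 32, 32
backbone_feat = torch.randn(1, C, H, W)         # 2D GAN backbone features
pose_feat = torch.randn(1, 16, H, W)            # rendered 3D-condition features

mapping = nn.Conv2d(16, 2 * C, kernel_size=1)   # predicts scale and shift maps
scale, shift = mapping(pose_feat).chunk(2, dim=1)

modulated = backbone_feat * (1 + scale) + shift  # spatially-adaptive modulation
print(modulated.shape)                           # torch.Size([1, 64, 32, 32])
```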
The input and output of most text generation tasks can be transformed into two sequences of tokens, which can be modeled using sequence-to-sequence learning tools such as Transformers. These models are usually trained by maximizing the likelihood of the output text sequence, assuming that the input sequence and all gold preceding tokens are given during training; during inference, however, the model suffers from the exposure bias problem (i.e., during beam search it only has access to its previously predicted tokens rather than the gold tokens). In this paper, we propose MoCa (Momentum Calibration) for text generation. MoCa is an online method that dynamically generates slowly evolving (but consistent) samples using a momentum moving-average generator with beam search, and MoCa learns to align its model scores for these samples with their actual qualities. Experiments on four text generation datasets (i.e., CNN/DailyMail, XSum, SAMSum, and Gigaword) show that MoCa consistently improves strong pre-trained Transformers fine-tuned in the vanilla way, and we achieve state-of-the-art results on the CNN/DailyMail and SAMSum datasets.
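The two mechanics the abstract names, a momentum moving-average generator and score calibration, can be sketched as follows; the pairwise ranking loss is one plausible instantiation under stated assumptions, not necessarily MoCa's exact objective, and all inputs below are placeholders.

```python
import torch

def ema_update(online, momentum, beta=0.999):
    # Momentum generator: an exponential moving average of the online
    # model's parameters, so its beam-search samples evolve slowly.
    with torch.no_grad():
        for p_m, p_o in zip(momentum.parameters(), online.parameters()):
            p_m.mul_(beta).add_(p_o, alpha=1.0 - beta)

def calibration_loss(scores, qualities, margin=0.1):
    # scores: model log-probs of sampled outputs; qualities: e.g. ROUGE.
    # Penalize every pair where a better sample is scored lower.
    order = torch.argsort(qualities, descending=True)
    s = scores[order]
    loss = scores.new_zeros(())
    n = len(s)
    for i in range(n):
        for j in range(i + 1, n):
            loss = loss + torch.relu(margin * (j - i) - (s[i] - s[j]))
    return loss / max(n * (n - 1) / 2, 1)

scores = torch.tensor([-2.1, -1.8, -2.5], requires_grad=True)  # placeholder
qualities = torch.tensor([0.45, 0.30, 0.52])                   # placeholder
print(calibration_loss(scores, qualities))
```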